
    SensibleSleep: A Bayesian Model for Learning Sleep Patterns from Smartphone Events

    We propose a Bayesian model for extracting sleep patterns from smartphone events. Our method identifies individuals' daily sleep periods and their evolution over time, and provides an estimate of the probability of sleep and wake transitions. The model is fitted to more than 400 participants from two different datasets, and we verify the results against ground truth from dedicated armband sleep trackers. We show that the model produces reliable sleep estimates with an accuracy of 0.89, both at the individual and at the collective level. Moreover, the Bayesian model quantifies uncertainty and encodes prior knowledge about sleep patterns. Compared with existing smartphone-based systems, our method requires only screen on/off events, and is therefore much less intrusive in terms of privacy and more battery-efficient.
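
    The abstract above describes inferring daily sleep periods from nothing but screen on/off events. As a rough illustration of that idea only, the sketch below scans candidate sleep windows under a simple two-rate Poisson likelihood on hourly screen-on counts; it is not the authors' hierarchical Bayesian model (which also quantifies uncertainty and encodes priors), and the rates, window ranges and variable names are assumptions.

```python
# Minimal sketch: infer a single daily sleep window from hourly screen-on
# event counts by scanning candidate (onset, duration) windows under a
# two-rate Poisson likelihood. All rates and names are illustrative.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(0)

# Simulated hourly screen-on counts for one day: few events while asleep (0-7 h).
true_sleep = set(range(0, 7))
counts = np.array([rng.poisson(0.2 if h in true_sleep else 4.0) for h in range(24)])

def log_lik(counts, sleep_hours, lam_sleep=0.2, lam_wake=4.0):
    """Poisson log-likelihood of the counts under a given sleep window."""
    ll = 0.0
    for h, c in enumerate(counts):
        lam = lam_sleep if h in sleep_hours else lam_wake
        ll += poisson.logpmf(c, lam)
    return ll

best = None
for onset in range(24):                  # candidate sleep-onset hour
    for duration in range(4, 12):        # candidate sleep duration in hours
        hours = {(onset + k) % 24 for k in range(duration)}
        ll = log_lik(counts, hours)
        if best is None or ll > best[0]:
            best = (ll, onset, duration)

print(f"Estimated sleep: onset {best[1]:02d}:00, duration {best[2]} h")
```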

    Smooth-pursuit performance during eye-typing from memory indicates mental fatigue

    Mental fatigue is known to occur as a result of activities related to, e.g., transportation, health care and military operations. Gaze tracking has wide-ranging applications, with the technology becoming more compact and requiring less processing power. Though numerous techniques have been applied to measure mental fatigue using gaze tracking, smooth pursuit, the natural eye movement generated when following a moving object with the gaze, has not been explored in relation to mental fatigue. In this paper, we report the results from a smooth-pursuit-based eye-typing experiment with varying task difficulty to generate cognitive load, performed in the morning and afternoon by 36 participants. We investigated the effects of time-on-task and time of day on mental fatigue using self-reported questionnaires and smooth-pursuit performance extracted from the gaze data. Self-reported mental fatigue increased with time-on-task, but time of day had no effect. The results show that smooth-pursuit performance declined with time-on-task, with increased error in gaze position and an inability to match the speed of the moving object. The findings demonstrate the feasibility of detecting mental fatigue from smooth-pursuit movements during an eye-interactive task such as eye-typing.
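
    As a small illustration of the kind of smooth-pursuit performance measures the abstract mentions (positional error relative to the moving object and failure to match its speed), the sketch below computes an RMS position error and a simple speed-gain ratio from gaze and target trajectories; the paper's exact feature definitions are not reproduced here, so these are generic assumptions.

```python
# Minimal sketch of two generic smooth-pursuit performance measures:
# positional error of gaze relative to the moving target, and a speed
# (gain) measure showing whether gaze keeps up with the target.
import numpy as np

def pursuit_metrics(gaze_xy, target_xy, dt):
    """gaze_xy, target_xy: (N, 2) arrays of screen coordinates; dt: sample interval (s)."""
    gaze_xy, target_xy = np.asarray(gaze_xy, float), np.asarray(target_xy, float)
    # Root-mean-square positional error between gaze and target.
    rms_error = np.sqrt(np.mean(np.sum((gaze_xy - target_xy) ** 2, axis=1)))
    # Instantaneous speeds from finite differences.
    gaze_speed = np.linalg.norm(np.diff(gaze_xy, axis=0), axis=1) / dt
    target_speed = np.linalg.norm(np.diff(target_xy, axis=0), axis=1) / dt
    # Pursuit gain: mean ratio of gaze speed to target speed (1.0 = perfect match).
    gain = np.mean(gaze_speed / np.maximum(target_speed, 1e-9))
    return rms_error, gain

# Example: target moves on a circle; a lagging, undershooting gaze follows it.
t = np.arange(0, 5, 1 / 60)                       # 60 Hz samples over 5 s
target = np.c_[np.cos(t), np.sin(t)] * 200        # pixels
noise = np.random.default_rng(1).normal(0, 5, (t.size, 2))
gaze = np.c_[np.cos(t - 0.15), np.sin(t - 0.15)] * 180 + noise
err, gain = pursuit_metrics(gaze, target, dt=1 / 60)
print(f"RMS error: {err:.1f} px, pursuit gain: {gain:.2f}")
```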

    A gaze interactive assembly instruction with pupillometric recording

    This paper presents a study of a gaze-interactive digital assembly instruction that provides concurrent logging of pupil data in a realistic task setting. The instruction allows hands-free gaze dwells as a substitute for finger clicks, and supports image rotation as well as image zooming by head movements. A user study in two LEGO toy stores with 72 children showed it to be immediately usable by 64 of them. Data logging of view times and pupil dilations was possible for 59 participants. On average, the children spent half of the time attending to the instruction (S.D. 10.9%). The recorded pupil size decreased throughout the building process, except when a child had to back-step: a regression was found to be followed by a pupil dilation. The main contribution of this study is to demonstrate gaze-tracking technology capable of supporting both robust interaction and concurrent, non-intrusive recording of gaze and pupil data in the wild. Previous research has found pupil dilation to be associated with changes in task effort; however, other factors such as fatigue, head motion, or ambient light may also have an impact. The final section summarizes our approach to this complexity of real-task pupil data collection and makes suggestions for how future applications may utilize pupil information.
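
    A minimal sketch of the dwell-for-click interaction the abstract describes (a gaze dwell substituting for a finger click) is given below; the dwell threshold, the rectangular target region and the class/method names are illustrative assumptions, not the instruction system's implementation.

```python
# Minimal sketch: a "click" is issued once gaze has stayed inside a target
# region for a fixed dwell time. Threshold and API are illustrative.
from dataclasses import dataclass

@dataclass
class DwellButton:
    x: float                  # target rectangle origin (pixels)
    y: float
    w: float                  # target rectangle size (pixels)
    h: float
    dwell_ms: float = 800.0   # dwell time required to trigger
    _accum: float = 0.0       # time gaze has stayed inside so far

    def contains(self, gx, gy):
        return self.x <= gx <= self.x + self.w and self.y <= gy <= self.y + self.h

    def update(self, gx, gy, dt_ms):
        """Feed one gaze sample; return True when the dwell completes."""
        if self.contains(gx, gy):
            self._accum += dt_ms
            if self._accum >= self.dwell_ms:
                self._accum = 0.0
                return True          # hands-free "click"
        else:
            self._accum = 0.0        # gaze left the target: reset the dwell
        return False

# Example: 60 Hz gaze samples resting on a 100x100 px button at (200, 200).
button = DwellButton(200, 200, 100, 100)
clicked = any(button.update(250, 250, 1000 / 60) for _ in range(60))
print("dwell click:", clicked)
```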

    Context-Aware Sensing and Implicit Ground Truth Collection: Building a Foundation for Event Triggered Surveys on Autonomous Shuttles

    The LINC project aims to study interactions between passengers and autonomous vehicles in natural settings at the campus of the Technical University of Denmark. To leverage the potential of IoT components in smartphone-based surveying, a system to identify specific spatial, temporal and occupancy contexts relevant to passengers’ experience was proposed as a central data collection strategy in the LINC project. Based on predefined contextual triggers, specific questionnaires can be distributed to affected passengers. This work focuses on the data-based discrimination between two fundamental contexts for LINC passengers: be-in and be-out (BIBO) of the vehicle. We present empirical evidence that Bluetooth Low Energy (BLE) beacons have the potential for independent BIBO classification. We compare BLE with other smartphone on-board sensors, such as the global positioning system (GPS) and the accelerometer, using (i) random-forest (RF), (ii) multi-layer perceptron (MLP), and (iii) native off-the-shelf smartphone classifiers. We also perform a sensitivity analysis of the impact that faulty BIBO ground truth has on the performance of the supervised classifiers (i) and (ii). Results show that BLE and GPS could allow reciprocal validation of passengers’ BIBO status, which could relieve passengers of the need to provide any further validation. We describe the smartphone-sensing platform deployed to gather the dataset used in this work, which involves passengers and autonomous vehicles in a realistic setting.
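
    As an illustration of classifier (i) from the abstract, the sketch below trains a random forest on simulated per-window BLE RSSI and GPS features to predict be-in/be-out (BIBO) status; the feature set, window statistics and simulated data are assumptions and not the LINC sensing pipeline itself.

```python
# Minimal sketch: random-forest BIBO classification over per-window features
# derived from BLE beacon RSSI and GPS. Features and data are simulated.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000
# Simulated per-window features: in-vehicle beacon RSSI (dBm), its variability,
# GPS speed (m/s) and distance to the shuttle route (m).
in_vehicle = rng.integers(0, 2, n)                          # 1 = be-in, 0 = be-out
rssi_mean = np.where(in_vehicle, rng.normal(-65, 5, n), rng.normal(-90, 6, n))
rssi_std = rng.gamma(2.0, 1.5, n)
gps_speed = np.where(in_vehicle, rng.normal(5.0, 1.5, n), rng.normal(1.2, 0.8, n))
route_dist = np.abs(np.where(in_vehicle, rng.normal(2, 1, n), rng.normal(40, 20, n)))

X = np.c_[rssi_mean, rssi_std, gps_speed, route_dist]
y = in_vehicle

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"held-out BIBO accuracy: {clf.score(X_te, y_te):.2f}")
```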

    Classifying Head Movements to Separate Head-Gaze and Head Gestures as Distinct Modes of Input

    Head movement is widely used as a uniform type of input for human-computer interaction. However, there are fundamental differences between head movements coupled with gaze in support of our visual system, and head movements performed as gestural expression. Both Head-Gaze and Head Gestures are of utility for interaction but differ in their affordances. To facilitate the treatment of Head-Gaze and Head Gestures as separate types of input, we developed HeadBoost, a novel classifier achieving high accuracy in classifying gaze-driven versus gestural head movement (F1 score: 0.89). We demonstrate the utility of the classifier with three applications: gestural input while avoiding unintentional input by Head-Gaze; target selection with Head-Gaze while avoiding Midas Touch by head gestures; and switching of cursor control between Head-Gaze for fast positioning and Head Gesture for refinement. The classification of Head-Gaze and Head Gesture allows for seamless head-based interaction while avoiding false activation.
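
    A minimal sketch of the classification step is given below: windowed head-movement statistics fed to a boosted classifier that separates gaze-driven from gestural head movement. It is not the HeadBoost implementation; the features, window length and simulated data are assumptions for illustration.

```python
# Minimal sketch: classify head-movement windows as gaze-driven (0) or
# gestural (1) from angular-velocity statistics, using gradient boosting.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)

def window_features(yaw_vel, pitch_vel):
    """Simple statistics of head angular velocity (deg/s) over one window."""
    speed = np.hypot(yaw_vel, pitch_vel)
    return [speed.mean(), speed.max(), speed.std(), np.abs(np.diff(speed)).mean()]

def simulate(label, n_windows=400, win=30):
    X = []
    for _ in range(n_windows):
        if label == 1:   # gesture: brisk, oscillatory movement (e.g. a head shake)
            t = np.linspace(0, 1, win)
            yaw = 60 * np.sin(2 * np.pi * 3 * t) + rng.normal(0, 5, win)
            pitch = rng.normal(0, 5, win)
        else:            # head-gaze: slower, smoother re-orienting movement
            yaw = np.linspace(0, rng.normal(20, 5), win) + rng.normal(0, 2, win)
            pitch = rng.normal(0, 2, win)
        X.append(window_features(np.diff(yaw), np.diff(pitch)))
    return np.array(X), np.full(n_windows, label)

X0, y0 = simulate(0)
X1, y1 = simulate(1)
X, y = np.vstack([X0, X1]), np.concatenate([y0, y1])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print(f"F1 (gesture vs head-gaze): {f1_score(y_te, clf.predict(X_te)):.2f}")
```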

    Project Half Double: Preliminary Results for Phase 1, June 2016

    Project Half Double has a clear mission: to find a project methodology that can increase the success rate of our projects while increasing the speed at which we generate new ideas and develop new products and services. Chaos and complexity should be seen as a basic condition and as an opportunity rather than a threat and a risk. We are convinced that, by doing so, we can strengthen Denmark’s competitiveness and play an important role in the battle for jobs and future welfare. The overall goal is to deliver “projects in half the time with double the impact”, where “half the time” should be understood as half the time to impact (benefit realisation, effect achieved) and not as half the time for project execution.

    The purpose of Project Half Double is to improve Danish industrial competitiveness by radically increasing the pace and impact of the development and innovation activities carried out within the framework of the projects. The formal part of Project Half Double was initiated in June 2015. We started out by developing, refining and testing the Half Double methodology on seven pilot projects in the first phase of the project, which will end in June 2016. The current status of the seven pilot projects against this overall goal can be summarised as follows:

    - The Lantmännen Unibake pilot project was able to launch the first stores after 5 months, a considerably shorter lead time than comparable reference projects, which have had lead times of 10 months or more. This is in line with the overall Project Half Double goal of delivering impact faster.
    - Four pilot projects have the potential to deliver impact faster, but it is too early to evaluate them. Some results may be evaluated in the second half of 2016, while other results will take longer to evaluate (Coloplast, Novo Nordisk, GN Audio and VELUX).
    - Two pilot projects will probably not be able to deliver impact faster, although it is too early to evaluate them. The evaluation of these pilot projects takes place over a longer period, as it will take years before many of their associated key performance indicators can be evaluated (Grundfos and Siemens Wind Power).

    In addition to the current status on delivering impact faster, it is important to highlight that Project Half Double phase 1 has planted many seeds in the pilot organisations concerning project methodology and beyond. The many learning points from each pilot project show that Project Half Double has left a clear footprint in the pilot organisations, and that the Half Double methodology has evolved considerably during phase 1.